24 research outputs found

    Advances in the Bayesian Occupancy Filter framework using robust motion detection technique for dynamic environment monitoring

    Get PDF
    The Bayesian Occupancy Filter provides a framework for grid-based monitoring of dynamic environments. It allows the estimation of dynamic grids containing both occupancy and velocity information; clustering such grids then provides detection of the objects in the observed scene. In this paper we present recent improvements to this framework. First, multiple layers from a laser scanner are fused using an opinion pool to deal with conflicting information. Then a fast motion detection technique based on laser data and odometer/IMU information is used to separate the dynamic environment from the static one. Rather than performing a complete SLAM (Simultaneous Localization and Mapping) solution, this technique is based on transferring occupancy information between consecutive data grids; the objective is to avoid the false positives (static objects) produced by other DATMO approaches. Finally, we show the integration with the Bayesian Occupancy Filter (BOF) and with the subsequent tracking module, the Fast Clustering-Tracking Algorithm (FCTA). In particular, we show the improvements achieved in tracking results after this integration for an intelligent vehicle application.
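The abstract says the laser layers are fused "using opinion pool" without giving the exact rule. As an illustration only, a *linear* opinion pool (a reliability-weighted average of per-layer occupancy probabilities) is one common choice; the layer contents and weights below are hypothetical:

```python
import numpy as np

def linear_opinion_pool(occupancy_layers, weights):
    """Fuse per-layer occupancy probabilities with a linear opinion pool.

    occupancy_layers: list of 2-D arrays of P(occupied) in [0, 1],
    one per laser layer; weights: per-layer reliabilities.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise the weights
    stack = np.stack(occupancy_layers)   # shape (n_layers, H, W)
    return np.tensordot(w, stack, axes=1)

# Two conflicting layers: one sees an obstacle, the other free space.
layer_a = np.array([[0.9, 0.5], [0.1, 0.5]])
layer_b = np.array([[0.1, 0.5], [0.1, 0.5]])
fused = linear_opinion_pool([layer_a, layer_b], weights=[0.5, 0.5])
# Conflicting cells are pulled toward 0.5 (maximum uncertainty).
```

The appeal of this rule for conflicting layers is that disagreement degrades gracefully to uncertainty rather than letting one layer dominate.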

    Multi-sensor data fusion for detection and tracking of moving objects from an autonomous vehicle

    Get PDF
    Perception is one of the key steps in the functioning of an autonomous vehicle, or even of a vehicle providing only driver assistance functions. A vehicle observes the external world using its sensors and builds an internal model of the outer environment, which it continuously updates using the latest sensor data. In this setting, perception can be divided into two parts: the first, called SLAM (Simultaneous Localization And Mapping), is concerned with building an online map of the external environment and localizing the host vehicle in this map; the second, called DATMO (Detection And Tracking of Moving Objects), deals with finding moving objects in the environment and tracking them over time. Using high-resolution, accurate laser scanners, many researchers have made successful efforts to solve these problems. However, with low-resolution or noisy laser scanners these problems, especially DATMO, remain a challenge, producing many false alarms, missed detections, or both.
    In this thesis we propose that by using a vision sensor (mono or stereo) along with a laser sensor, and by developing an effective fusion scheme at an appropriate level, these problems can be greatly reduced. The main contribution of this research is the identification of three fusion levels and the development of fusion techniques for each level in a SLAM- and DATMO-based perception architecture for autonomous vehicles. Depending on the amount of preprocessing required before fusion, we call them low-level, object-detection-level, and track-level fusion. For the low level, we propose a grid-based fusion technique: by giving each sensor's grid an appropriate weight (depending on the sensor properties), a fused grid can be obtained that gives a better view of the external environment. For object-detection-level fusion, the lists of objects detected by each sensor are fused into a list of fused objects, where each fused object carries more information than its previous versions; we use a Bayesian fusion technique for this level. Track-level fusion requires tracking moving objects for each sensor separately and then fusing the resulting tracks; fusion at this level helps remove false tracks. The second contribution of this research is the development of a fast technique for finding road borders from noisy laser data, and then using this border information to remove false moving objects. We have observed that many false moving objects appear near the road borders due to sensor noise; if they are not filtered out, they result in many false tracks close to the vehicle, causing it to brake or issue warning messages to the driver falsely. The third contribution is the development of a complete perception solution for lidar and stereo-vision sensors and its integration on a real vehicle demonstrator used for the European Union project INTERSAFE-2. This project is concerned with safety at intersections and aims at reducing injury and fatal accidents there. In this project we worked in collaboration with Volkswagen, the Technical University of Cluj-Napoca, Romania, and INRIA Paris to provide a complete perception and risk assessment solution for the Volkswagen demonstrator.
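The abstract names a "Bayesian fusion technique" for the object-detection level but does not detail it. As a hedged sketch, one standard form is a naive-Bayes combination of each sensor's confidence that a detected object exists, assuming the sensors are conditionally independent; the probabilities below are invented for illustration:

```python
def fuse_detection_confidence(p_laser, p_vision, prior=0.5):
    """Naive-Bayes fusion of two sensors' independent posteriors
    P(object exists | sensor_i) into a single fused confidence."""
    prior_odds = prior / (1.0 - prior)
    odds = prior_odds
    for p in (p_laser, p_vision):
        # Each sensor contributes its likelihood ratio against the prior
        # (conditional-independence assumption).
        odds *= (p / (1.0 - p)) / prior_odds
    return odds / (1.0 + odds)

# Laser is fairly sure (0.8), vision agrees weakly (0.6):
p = fuse_detection_confidence(0.8, 0.6)
# Agreement reinforces: the fused confidence exceeds either input.
```

With a uniform prior this reduces to multiplying the odds of the two sensors, which is why two moderately confident, agreeing detections yield a more confident fused object.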

    Catalytic Framework: Intersectional Analysis for Community Engagement

    Get PDF
    The Community Engagement Centre (CEC) has been active across a range of diverse urban and rural populations throughout Pakistan and works closely with marginalised communities. The collective nature of Pakistani culture and its social inequities has required the CEC to recognise the intersections that shape contexts and situations, to promote local ownership, empower communities to identify and utilise existing resources for sustainable change, and improve health outcomes. Through an immersive community engagement (CE) strategy, CEC utilises participatory tools to collect stories from communities to understand their lived experiences, barriers and enablers to access, and the dynamics of power that influence these. To understand this complex relationship, a Catalytic Framework that examines the intersections within communities' narratives was developed. Preliminary review of community narratives collected as part of programmatic operations yielded four significant elements: (1) unique, individual circumstances, (2) aspects of identity, (3) types of discrimination (if present), and (4) larger structures that reinforce exclusion (or enforce inclusion). A unique feature identified within the process of CE was the role of 'catalysts' – one or many people who may have transformative potential at any of these levels due to their influence, active facilitation, or agency. This novel framework enables an understanding of the threads of experience and identifies the elements and structures that impact the lives of Pakistan's diverse population. It works by recognising the visible intersections of class, identity, gender, and power, as well as questioning what remains unarticulated, and thus promotes meaningful community engagement across different cultures and fields.

    Multi sensor data fusion for detection and tracking of moving objects from a dynamic autonomous vehicle

    No full text
    Perception is one of the key steps in the functioning of an autonomous vehicle, or even of a vehicle providing only driver assistance functions. A vehicle observes the external world using its sensors and builds an internal model of the outer environment, which it continuously updates using the latest sensor data. In this setting, perception can be divided into two parts: the first, called SLAM (Simultaneous Localization And Mapping), is concerned with building an online map of the external environment and localizing the host vehicle in this map; the second, called DATMO (Detection And Tracking of Moving Objects), deals with finding moving objects in the environment and tracking them over time. Using high-resolution, accurate laser scanners, many researchers have made successful efforts to solve these problems. However, with low-resolution or noisy laser scanners these problems, especially DATMO, remain a challenge, producing many false alarms, missed detections, or both.
    In this thesis we propose that by using a vision sensor (mono or stereo) along with a laser sensor, and by developing an effective fusion scheme at an appropriate level, these problems can be greatly reduced. The main contribution of this research is the identification of three fusion levels and the development of fusion techniques for each level in a SLAM- and DATMO-based perception architecture for autonomous vehicles. Depending on the amount of preprocessing required before fusion, we call them low-level, object-detection-level, and track-level fusion. For the low level, we propose a grid-based fusion technique: by giving each sensor's grid an appropriate weight (depending on the sensor properties), a fused grid can be obtained that gives a better view of the external environment. For object-detection-level fusion, the lists of objects detected by each sensor are fused into a list of fused objects, where each fused object carries more information than its previous versions; we use a Bayesian fusion technique for this level. Track-level fusion requires tracking moving objects for each sensor separately and then fusing the resulting tracks; fusion at this level helps remove false tracks. The second contribution of this research is the development of a fast technique for finding road borders from noisy laser data, and then using this border information to remove false moving objects. We have observed that many false moving objects appear near the road borders due to sensor noise; if they are not filtered out, they result in many false tracks close to the vehicle, causing it to brake or issue warning messages to the driver falsely. The third contribution is the development of a complete perception solution for lidar and stereo-vision sensors and its integration on a real vehicle demonstrator used for the European Union project INTERSAFE-2. This project is concerned with safety at intersections and aims at reducing injury and fatal accidents there. In this project we worked in collaboration with Volkswagen, the Technical University of Cluj-Napoca, Romania, and INRIA Paris to provide a complete perception and risk assessment solution for this project.
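The track-level fusion described above (fuse per-sensor track lists; drop tracks lacking cross-sensor support) is not specified in detail in the abstract. A minimal sketch, with hypothetical track representations (2-D positions) and a hypothetical distance gate, could look like:

```python
import math

def fuse_track_lists(laser_tracks, vision_tracks, gate=2.0):
    """Track-level fusion sketch: pair each laser track with the nearest
    unused vision track within a distance gate; unmatched laser tracks
    are treated as potential false tracks and dropped."""
    fused, used = [], set()
    for lx, ly in laser_tracks:
        best, best_d = None, gate
        for j, (vx, vy) in enumerate(vision_tracks):
            d = math.hypot(lx - vx, ly - vy)
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            vx, vy = vision_tracks[best]
            fused.append(((lx + vx) / 2.0, (ly + vy) / 2.0))  # simple average
    return fused

laser = [(0.0, 0.0), (10.0, 10.0)]   # the second track has no vision support
vision = [(0.5, 0.0)]
tracks = fuse_track_lists(laser, vision)
# Only the track confirmed by both sensors survives.
```

This illustrates why fusion at this level "helps remove false tracks": a spurious track produced by one noisy sensor finds no counterpart in the other sensor's list and is filtered out.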

    Multi-sensor data fusion for detection and tracking of moving objects from an autonomous vehicle

    No full text
    Perception is one of the key steps in the functioning of an autonomous vehicle, or even of a vehicle providing only driver assistance functions. A vehicle observes the external world using its sensors and builds an internal model of the outer environment, which it continuously updates using the latest sensor data. In this setting, perception can be divided into two parts: the first, called SLAM (Simultaneous Localization And Mapping), is concerned with building an online map of the external environment and localizing the host vehicle in this map; the second, called DATMO (Detection And Tracking of Moving Objects), deals with finding moving objects in the environment and tracking them over time. Using high-resolution, accurate laser scanners, many researchers have made successful efforts to solve these problems. However, with low-resolution or noisy laser scanners these problems, especially DATMO, remain a challenge, producing many false alarms, missed detections, or both.
    In this thesis we propose that by using a vision sensor (mono or stereo) along with a laser sensor, and by developing an effective fusion scheme at an appropriate level, these problems can be greatly reduced. The main contribution of this research is the identification of three fusion levels and the development of fusion techniques for each level in a SLAM- and DATMO-based perception architecture for autonomous vehicles. Depending on the amount of preprocessing required before fusion, we call them low-level, object-detection-level, and track-level fusion. For the low level, we propose a grid-based fusion technique: by giving each sensor's grid an appropriate weight (depending on the sensor properties), a fused grid can be obtained that gives a better view of the external environment. For object-detection-level fusion, the lists of objects detected by each sensor are fused into a list of fused objects, where each fused object carries more information than its previous versions; we use a Bayesian fusion technique for this level. Track-level fusion requires tracking moving objects for each sensor separately and then fusing the resulting tracks; fusion at this level helps remove false tracks. The second contribution of this research is the development of a fast technique for finding road borders from noisy laser data, and then using this border information to remove false moving objects. We have observed that many false moving objects appear near the road borders due to sensor noise; if they are not filtered out, they result in many false tracks close to the vehicle, causing it to brake or issue warning messages to the driver falsely. The third contribution is the development of a complete perception solution for lidar and stereo-vision sensors and its integration on a real vehicle demonstrator used for the European Union project INTERSAFE-2. This project is concerned with safety at intersections and aims at reducing injury and fatal accidents there. In this project we worked in collaboration with Volkswagen, the Technical University of Cluj-Napoca, Romania, and INRIA Paris to provide a complete perception and risk assessment solution for the Volkswagen demonstrator.

    Improving Moving Objects Tracking Using Road Model for Laser Data

    No full text
    In this paper we present a fast algorithm to detect road borders from laser data. Two local search windows, one on the right side of the host vehicle and the other on the left, are moved rightward and leftward, respectively, from the current position of the vehicle in the map. A score function is evaluated to determine the presence or absence of a road border in the current search window. We use the detected road border information to reduce false alarms in our previous work on DATMO (detection and tracking of moving objects). We also show how this information can be used to infer the drivable area and the presence of intersections on the road. Results on data sets obtained from real demonstrator vehicles show that this technique can be successfully applied in real time.
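The abstract does not define the score function or window geometry. A simplified one-sided sketch, where the "score" is just the count of laser impacts inside a sliding lateral window (all parameters hypothetical), conveys the idea:

```python
def find_border(points_y, y0=0.0, step=0.5, width=0.5, y_max=10.0, min_hits=3):
    """Slide a window of `width` metres outward from lateral offset y0 and
    report the first offset whose point count reaches `min_hits` (a simple
    stand-in for the paper's unspecified score function).

    points_y: lateral offsets of laser impacts on one side of the vehicle.
    """
    y = y0
    while y < y_max:
        hits = sum(1 for p in points_y if y <= p < y + width)
        if hits >= min_hits:
            return y            # border (e.g. a curb) found at this offset
        y += step
    return None                 # no border detected on this side

# Curb-like cluster of laser returns around y = 3 m on the right side:
right_side = [3.0, 3.1, 3.2, 3.3, 5.0]
border = find_border(right_side)
```

Running the same search with the sign of the offsets flipped would cover the left side; the gap between the two borders then approximates the drivable area mentioned in the abstract.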

    Online localization and mapping with moving objects detection in dynamic outdoor environments

    No full text
    In this paper, we present a real-time algorithm for online simultaneous localization and mapping (SLAM) with detection and tracking of moving objects (DATMO) in dynamic outdoor environments, from a moving vehicle equipped with a laser sensor and odometry. To correct the vehicle location from odometry, we introduce a new fast implementation of an incremental scan matching method that works reliably in dynamic outdoor environments. Once a good vehicle location is estimated, the surrounding map is updated incrementally and moving objects are detected without a priori knowledge of the targets. Detected moving objects are finally tracked using the Global Nearest Neighbor (GNN) method. Experimental results on a dataset collected from the INTERSAFE-2 demonstrator for a typical scenario show the effectiveness of this technique.
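GNN tracking chooses the globally cheapest assignment of detections to tracks, rather than matching each track greedily. The paper gives no implementation details, so the sketch below uses a brute-force search over permutations with a hypothetical distance gate, which is adequate for the handful of targets in a typical scene:

```python
import itertools
import math

def gnn_assign(tracks, detections, gate=3.0):
    """Global Nearest Neighbor: pick the detection-to-track assignment
    minimising the total gated Euclidean distance (brute force).

    Returns best[i] = index of the detection assigned to track i,
    or None if no assignment keeps every pair within the gate."""
    n = len(tracks)
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(detections)), n):
        cost, ok = 0.0, True
        for i, j in enumerate(perm):
            d = math.dist(tracks[i], detections[j])
            if d > gate:        # gating: implausible pairings are rejected
                ok = False
                break
            cost += d
        if ok and cost < best_cost:
            best, best_cost = list(perm), cost
    return best

tracks = [(0.0, 0.0), (5.0, 0.0)]
dets = [(4.8, 0.1), (0.2, -0.1)]
assignment = gnn_assign(tracks, dets)   # track 0 -> det 1, track 1 -> det 0
```

For larger numbers of targets a real implementation would replace the permutation search with the Hungarian algorithm, which finds the same optimum in polynomial time.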

    Fast classification of static and dynamic environment for Bayesian Occupancy Filter (BOF)

    No full text
    In this paper we present a fast motion detection technique based on laser data and odometry/IMU information. Rather than performing a complete SLAM (Simultaneous Localization and Mapping) solution, this technique is based on transferring occupancy information between two consecutive data grids. We plan to use the output of this work in the Bayesian Occupancy Filter (BOF) framework to reduce processing time and improve the results of the subsequent BOF-based clustering and tracking algorithm. Experimental results obtained from a real demonstrator vehicle show the effectiveness of our technique.
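The core of transferring occupancy between consecutive grids can be sketched as a simple rule: a cell that was observed free in the previous (ego-motion-compensated) grid but is occupied now is labelled dynamic. The thresholds and grid contents below are illustrative assumptions, and the sketch presumes the previous grid has already been shifted into the current vehicle frame using the odometry/IMU estimate:

```python
import numpy as np

def detect_motion(prev_grid, curr_grid, free_thresh=0.3, occ_thresh=0.7):
    """Label cells as dynamic when they are occupied in the current grid
    but were observed free in the previous one. Both grids hold
    P(occupied) per cell, expressed in the same (current) vehicle frame."""
    was_free = prev_grid < free_thresh
    is_occ = curr_grid > occ_thresh
    return was_free & is_occ          # boolean mask of moving cells

prev = np.array([[0.1, 0.9], [0.1, 0.5]])
curr = np.array([[0.9, 0.9], [0.1, 0.5]])
moving = detect_motion(prev, curr)
# Only the cell that flips free -> occupied is flagged dynamic; the cell
# that stays occupied (a static obstacle) is not.
```

This is why the technique avoids the static-object false positives of naive DATMO approaches: a wall is occupied in both grids and therefore never satisfies the free-then-occupied condition.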